    Statistical Issues in Modeling Chronic Disease in Cohort Studies

    The final publication (Cook, R. J., & Lawless, J. F. (2014). Statistical issues in modeling chronic disease in cohort studies. Statistics in Biosciences, 6(1), 127-161. DOI: 10.1007/s12561-013-9087-8) is available at Springer via http://link.springer.com/article/10.1007/s12561-013-9087-8

    Observational cohort studies of individuals with chronic disease provide information on rates of disease progression, the effects of fixed and time-varying risk factors, and the extent of heterogeneity in the course of disease. Analysis of this information is often facilitated by the use of multistate models with intensity functions governing transitions between disease states. We discuss modeling and analysis issues for such models when individuals are observed intermittently. Frameworks for dealing with heterogeneity and measurement error are discussed, including random effect models, finite mixture models, and hidden Markov models. Cohorts are often defined by convenience, and ways of addressing outcome-dependent sampling or observation of individuals are also discussed. Data on progression of joint damage in psoriatic arthritis and retinopathy in diabetes are analysed to illustrate these issues and related methodology.

    Funding: Natural Sciences and Engineering Research Council of Canada (RGPIN 155849); Canadian Institutes of Health Research (FRN 13887).
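    The multistate framework this abstract describes can be illustrated with a small simulation. The three-state illness-death structure and every intensity value below are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical 3-state progressive model: states 0 (healthy), 1 (ill),
# 2 (dead). Constant transition intensities (rates per unit time); the
# numbers are illustrative assumptions, not estimates from the paper.
Q = {
    0: {1: 0.10, 2: 0.02},  # healthy -> ill, healthy -> dead
    1: {2: 0.20},           # ill -> dead (state 2 is absorbing)
}

def simulate_path(rng, t_max=50.0):
    """Simulate one sample path as a list of (time, state) up to t_max."""
    t, state = 0.0, 0
    path = [(t, state)]
    while state in Q and t < t_max:
        rates = Q[state]
        total = sum(rates.values())
        t += rng.expovariate(total)          # exponential sojourn time
        if t >= t_max:
            break
        # choose the destination state with probability proportional
        # to its transition intensity
        u, cum = rng.random() * total, 0.0
        for nxt, r in rates.items():
            cum += r
            if u <= cum:
                state = nxt
                break
        path.append((t, state))
    return path

rng = random.Random(1)
paths = [simulate_path(rng) for _ in range(2000)]
# crude cohort estimate of P(dead by t = 10)
dead_by_10 = sum(any(s == 2 and t <= 10 for t, s in p) for p in paths) / len(paths)
```

    Intermittent observation, the situation the paper addresses, would correspond to seeing each path only at scattered clinic visits rather than observing the exact transition times simulated here.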

    Cumulative processes related to event histories

    Costs or benefits which accumulate for individuals over time are of interest in many life history processes. Familiar examples include costs of health care for persons with chronic medical conditions, the payments to insured persons during periods of disability, and quality of life, which is sometimes used in the evaluation of treatments in terminally ill patients. For convenience, here we use the term costs to refer to cost or other cumulative measures. Two important scenarios are (i) where costs are associated with the occurrence of certain events, so that total cost accumulates as a step function, and (ii) where individuals may move between various states over time, with cost accumulating at a constant rate determined by the state occupied. In both cases, there is frequently a random variable T that represents the duration of the process generating the costs. Here we consider estimation of the mean cumulative cost over a period of interest using methods based upon marginal features of the cost process and intensity-based models. Robustness to adaptive censoring is discussed in the context of the multi-state methods. Data from a quality of life study of breast cancer patients are used to illustrate the methods.
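    Scenario (i) above, in which total cost accumulates as a step function as events occur over a random duration T, can be sketched with a short Monte Carlo. The event rate, per-event cost, and exponential duration are illustrative assumptions, not values from the paper.

```python
import random

rng = random.Random(2)

def total_cost(event_rate=0.5, cost_per_event=100.0, mean_T=10.0):
    """Total cost accumulated over one subject's process duration T.

    Events occur at a constant rate; each event adds a fixed cost, so
    the cumulative cost is a step function that stops at T.
    """
    T = rng.expovariate(1.0 / mean_T)        # random duration of the process
    t, cost = 0.0, 0.0
    while True:
        t += rng.expovariate(event_rate)     # waiting time to the next event
        if t > T:
            return cost
        cost += cost_per_event               # cost jumps at each event

# Monte Carlo estimate of the mean cumulative cost E[C(T)]
n = 5000
mean_cost = sum(total_cost() for _ in range(n)) / n
```

    With these assumed values the true mean is event_rate × cost_per_event × E[T] = 500, which the simulated average should approach; the paper's concern is estimating such means when follow-up is censored, which this uncensored sketch does not address.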

    Estimating and testing direct genetic effects in directed acyclic graphs using estimating equations

    In genetic association studies, it is important to distinguish direct and indirect genetic effects in order to build truly functional models. For this purpose, we consider a directed acyclic graph setting with genetic variants, primary and intermediate phenotypes, and confounding factors. In order to make valid statistical inference on direct genetic effects on the primary phenotype, it is necessary to consider all potential effects in the graph, and we propose to use the estimating equations method with robust Huber-White sandwich standard errors. We evaluate the proposed causal inference based on estimating equations (CIEE) method and compare it with traditional multiple regression methods, the structural equation modeling method, and sequential G-estimation methods through a simulation study for the analysis of (completely observed) quantitative traits and time-to-event traits subject to censoring as primary phenotypes. The results show that CIEE provides valid estimators and inference by successfully removing the effect of intermediate phenotypes from the primary phenotype and is robust against measured and unmeasured confounding of the indirect effect through observed factors. All other methods except the sequential G-estimation method for quantitative traits fail in some scenarios where their test statistics yield inflated type I errors. In the analysis of the Genetic Analysis Workshop 19 dataset, we estimate and test genetic effects on blood pressure accounting for intermediate gene expression phenotypes. The results show that CIEE can identify genetic variants that would be missed by traditional regression analyses. CIEE is computationally fast, widely applicable to different fields, and available as an R package.
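    The Huber-White sandwich variance that CIEE relies on can be shown in a stripped-down scalar case. The no-intercept linear model, the heteroscedastic noise, and all numbers below are a generic illustration of the sandwich idea, not the CIEE implementation.

```python
import random

# Model y = b*x + error, with error variance growing with x, so the
# naive model-based variance is wrong but the sandwich variance is not.
# All values here are illustrative assumptions.
rng = random.Random(3)
n = 4000
x = [rng.uniform(1, 3) for _ in range(n)]
y = [2.0 * xi + rng.gauss(0, xi) for xi in x]   # true slope b = 2

# Solve the estimating equation  sum_i x_i (y_i - b x_i) = 0  for b
b_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Sandwich variance: bread^{-1} * meat * bread^{-1} (scalars here)
bread = sum(xi * xi for xi in x)                # -d/db of the estimating eq.
meat = sum((xi * (yi - b_hat * xi)) ** 2 for xi, yi in zip(x, y))
var_sandwich = meat / bread ** 2                # robust variance of b_hat
se = var_sandwich ** 0.5
```

    In the vector-parameter case used by CIEE, bread and meat become matrices and the same formula gives the familiar robust covariance matrix.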

    A new perspective on loss to follow-up in failure time and life history studies

    This is the peer reviewed version of the following article: Jerald F. Lawless and Richard J. Cook, A new perspective on loss to follow-up in failure time and life history studies. Statistics in Medicine (2019), 38(23): 4583–4610, which has been published in final form at https://doi.org/10.1002/sim.8318

    A framework is proposed for the joint modeling of life history and loss to follow-up (LTF) processes in cohort studies. This framework provides a basis for discussing independence conditions for LTF and censoring and examining the implications of dependent LTF. We consider failure time and more general life history processes. The joint models are based on multistate processes with expanded state spaces encompassing both the life history and LTF processes. Tracing studies are discussed as a means of investigating the presence of dependent censoring and providing valid estimates of transition intensities and state occupancy probabilities. Simulation studies and an illustration based on a cohort of individuals with systemic lupus erythematosus demonstrate the usefulness and properties of the proposed methods.

    Funding was provided by the Natural Sciences and Engineering Research Council of Canada (RGPIN 8597 for JFL; RGPIN 155849 and RGPIN 04207 for RJC) and the Canadian Institutes of Health Research (FRN 13887 for RJC).

    Power-Law Adjusted Survival Models

    Bayes linear kinematics in the analysis of failure rates and failure time distributions

    Collections of related Poisson or binomial counts arise, for example, from numbers of failures in similar machines or in neighbouring time periods. A conventional Bayesian analysis requires a rather indirect prior specification and intensive numerical methods for posterior evaluations. An alternative approach using Bayes linear kinematics, in which simple conjugate specifications for individual counts are linked through a Bayes linear belief structure, is presented. Intensive numerical methods are not required. The use of transformations of the binomial and Poisson parameters is proposed. The approach is illustrated in two examples, one involving a Poisson count of failures, the other involving a binomial count in an analysis of failure times.

    A hazard model of the probability of medical school dropout in the United Kingdom

    From individual-level longitudinal data for two entire cohorts of medical students in UK universities, we use multilevel models to analyse the probability that an individual student will drop out of medical school. We find that academic preparedness—both in terms of previous subjects studied and levels of attainment therein—is the major influence on withdrawal by medical students. Additionally, males and more mature students are more likely to withdraw than females or younger students, respectively. We find evidence that the factors influencing the decision to transfer course differ from those affecting the decision to drop out for other reasons.
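    The discrete-time dropout hazard that underlies such a model can be illustrated with a toy life table: the hazard in year t is the number of dropouts that year divided by the number still enrolled at the start of the year. All counts below are invented for illustration.

```python
# Toy life-table estimate of a discrete-time dropout hazard by year of
# study. The counts are hypothetical, not from the UK cohorts analysed
# in the paper.
at_risk = 10000
dropouts_by_year = [300, 180, 90, 60, 40]   # invented yearly dropout counts

hazards, survival = [], 1.0
for d in dropouts_by_year:
    h = d / at_risk            # hazard: P(drop out this year | still enrolled)
    hazards.append(h)
    survival *= 1.0 - h        # probability of remaining enrolled so far
    at_risk -= d               # students who left exit the risk set
```

    The multilevel model in the paper effectively regresses these per-period hazards on student-level covariates (prior attainment, sex, age) with universities as a grouping level, rather than tabulating raw counts as this sketch does.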

    Predicting the Occurrence of Variants in RAG1 and RAG2

    While widespread genome sequencing ushers in a new era of preventive medicine, the tools for predictive genomics are still lacking. Time and resource limitations mean that human diseases remain uncharacterized because of an inability to predict clinically relevant genetic variants. A strategy of targeting highly conserved protein regions is used commonly in functional studies. However, this benefit is lost for rare diseases where the attributable genes are mostly conserved. An immunological disorder exemplifying this challenge arises through damaging mutations in RAG1 and RAG2, which present at an early age with a distinct phenotype of life-threatening immunodeficiency or autoimmunity. Many tools exist for variant pathogenicity prediction, but these cannot account for the probability of variant occurrence. Here, we present a method that predicts the likelihood of mutation for every amino acid residue in the RAG1 and RAG2 proteins. Population genetics data from approximately 146,000 individuals were used for rare variant analysis. Forty-four known pathogenic variants reported in patients and recombination activity measurements from 110 RAG1/2 mutants were used to validate calculated scores. Probabilities were compared with 98 currently known human cases of disease. A genome sequence dataset of 558 patients who have primary immunodeficiency but are negative for RAG deficiency was also used as validation controls. We compared the difference between mutation likelihood and pathogenicity prediction. Our method builds a map of the most probable mutations, allowing pre-emptive functional analysis. This method may be applied to other diseases with hopes of improving preparedness for clinical diagnosis.